Search Results: "bap"

7 March 2017

Daniel Stender: Remotely deploy a WSGI application (as a Debian package) with Ansible

This is a mini workshop as an introduction to using Ansible for the administration of Debian systems. As an example it's shown how this configuration management tool can be used to remotely set up a simple WSGI application running on an Apache web server on a Debian installation, to make it available on the net. The application which is used as an example is httpbin by Runscope. This is a useful HTTP request service for the development of web software or any other purposes, which features a number of specific endpoints that can be used for different testing matters. For example, the address http://<address>/user-agent of httpbin returns the user agent identification of the client program which has been used to query it (taken from the header of the request). There are official instances of this request server running on the net, like the one at http://httpbin.org/. WSGI is a widespread standard for programming web applications in Python, and httpbin is implemented in Python using the Flask web framework.

The basis of the workshop is a simple base installation of an up-to-date Debian 8 "Jessie" on a demonstration host; the latest official release of that is 8.7. As a first step, the installation has to be switched over to the testing branch of Debian, because the Debian packages of httpbin are comparatively new and will be introduced into the stable branch of the archive for the first time with the upcoming major release 9 "Stretch". After that, the Apache packages which are needed to make httpbin available (apache2 and libapache2-mod-wsgi; other web servers could of course be used instead), and which are not part of a base installation, are installed from the archive. The web server then gets launched remotely, the httpbin package is pulled as well, and the service is integrated into Apache. To achieve that, two configuration files must be deployed on the target system, and a few additional operations are needed to get everything working together. Every step is preconfigured within Ansible so that the whole process can be launched by a single command on the control node, and can be run on a single target machine or on a number of comparable ones automatically and reproducibly.

If a server is needed for trying this workshop out, straightforward cloud server instances are available on the net, for example at DigitalOcean, but let me underline this: there are other cloud providers which offer the same things, too! If it's needed only for a limited time, for experiments or other purposes, low priced droplets are available here which are billed by the hour. After registering, the wanted machine(s) can be set up easily over the web interface (choose Debian 8.7 as OS), but there are also command line clients available like doctl (which is not yet available as a Debian package). For the convenient use of a droplet the user should generate an SSH key pair on the local machine first:
$ ssh-keygen -t rsa -b 4096 -C "john@doe.com" -f ~/.ssh/mykey
The public part of the key, ~/.ssh/mykey.pub, can then be uploaded into the user account before the droplet is created; it is then integrated automatically. There is a good introduction to the whole process in the excellent tutorial series at serversforhackers.com, here. Ansible can then use the SSH key pair to log into a droplet without the need to type in the password every time. On a cloud server like this carrying a Debian base system, the examples in this workshop can be tried out well. Ansible works client-less and doesn't need to be installed on the remote system but only on the control node; however, a Python 2.7 interpreter is needed on the remote side (the base system of DigitalOcean includes that). So that Ansible can do anything on them, remote servers which are going to be controlled must be added to /etc/ansible/hosts. This is a configuration file in the INI format for DNS names and IP addresses. For a flexible organisation of the server inventory it's possible to group hosts here, IP ranges can be given, and optional variables can be used, among other useful things (the default file contains a couple of examples). One or a couple of servers (in Ansible they are called "hosts") on which something particular is going to happen (like httpbin being installed) can be added like this (the group name is arbitrary):
[httpbin]
192.0.2.0
Whether Ansible can communicate with the hosts in the group and actually operate on them can be verified by simply pinging them like this:
$ ansible httpbin -m ping -u root --private-key=~/.ssh/mykey
192.0.2.0 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
The command succeeded, so it appears there isn't any significant problem regarding this machine. The return value changed: false indicates that there haven't been any changes on that host as a result of the execution of this command. Next to ping there are several other modules which can be used with the command line tool ansible in the same way, and these modules are something like the core components of Ansible. The module shell for example can be used to execute shell commands on the remote machine, like uname to get some system information returned from the server:
$ ansible httpbin -m shell -a "uname -a" -u root --private-key=~/.ssh/mykey
192.0.2.0 | SUCCESS | rc=0 >>
Linux debian-512mb-fra1-01 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19) x86_64 GNU/Linux
In the same way, the module apt can be used to remotely install packages. But with that there's no major advantage over other software products that offer similar functionality, and using those modules on the command line is just the most basic way of using Ansible. Playbooks in Ansible are YAML scripts for the manipulation of the registered hosts in /etc/ansible/hosts. Different tasks can be defined here for successive processing; for example, a simple playbook for changing the package source from stable to testing goes like this:
---
 - hosts: httpbin
   tasks:
   - name: remove "jessie" package source
     apt_repository: repo='deb http://mirrors.digitalocean.com/debian jessie main' state=absent
   - name: add "testing" package source
     apt_repository: repo='deb http://httpredir.debian.org/debian testing main contrib non-free' state=present
   - name: upgrade packages
     apt: update_cache=yes upgrade=dist
First, like with the CLI tool ansible above, the targeted host group httpbin is chosen. The default user root and the SSH key could be fixed here, too, to spare the need to give them on the command line. Then there are three tasks defined to be worked through consecutively: with the module apt_repository, the preset package source jessie is removed from /etc/apt/sources.list. Then a new package source for the testing archive is added to /etc/apt/sources.list.d/ using the same module (by the way, mirrors.digitalocean.com also provides testing, and that might be faster). After that, the apt module is used to upgrade the package inventory (it performs apt-get dist-upgrade), after an update of the package cache has taken place (by running apt-get update). A playbook like this (the filename is arbitrary, but commonly carries the suffix .yml) can be run by the CLI tool ansible-playbook, like this:
$ ansible-playbook httpbin.yml -u root --private-key=~/.ssh/mykey
Ansible then works through the individual tasks on the remote server(s) top-down, and thanks to a high speed net connection and SSD block device hardware, the change of the system to a Debian testing base installation takes only around a minute to complete in the cloud. While working, Ansible puts out status reports for the individual operations. If certain changes have already been made on the base system, like when a playbook is run one more time, the modules of course sense that and just return the information that the system hasn't been changed, because it is already in the desired state. Beyond the basic playbook shown here there are more advanced features available, like register and when, to bind the execution of a play to the error-free result of a previous one. The apt module then can be used in the playbook to install the three needed binary packages one after another:
   - name: install apache2
     apt: pkg=apache2 state=present
   - name: install mod_wsgi
     apt: pkg=libapache2-mod-wsgi state=present
   - name: install httpbin
     apt: pkg=python-httpbin state=present
The Debian packages are configured in a way that the Apache web server is running immediately after installation, and the Apache module mod_wsgi is automatically integrated. If something else is desired, there are Ansible modules available for operating on Apache which can reverse this. By the way, after the packages have been installed, the httpbin server can be launched with python -m httpbin.core, but this runs only a mini web server which is not suitable for production use. To get httpbin running on the Apache web server, two configuration files are needed. They can be set up in the project directory on the control node and then uploaded onto the remote machine with another Ansible module. The file httpbin.wsgi (the name is again arbitrary) contains only a single line, which is the starter for the WSGI application to run:
from httpbin import app as application
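That one import is enough because mod_wsgi looks for a callable named application in the script. For comparison, a complete WSGI application written by hand is hardly bigger; this is just an illustrative sketch, not httpbin's code:

# minimal WSGI application, for illustration only; httpbin's Flask app
# exposes the same callable interface under the name "application"
def application(environ, start_response):
    body = b"Hello, WSGI!\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]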
The module copy can be used to deploy that script on the host, while the target folder /var/www/httpbin must be set up before that by the module file. In addition, a separate user account like httpbin (the name is also arbitrary but picked up in the other config file) is needed to run it, and the module user can set this up. The demonstration playbook continues, and the plays which perform these three operations go like this:
   - name: mkdir /var/www/httpbin
     file: path=/var/www/httpbin state=directory
   - name: set up user "httpbin"
     user: name=httpbin
   - name: copy WSGI starter
     copy: src=httpbin.wsgi dest=/var/www/httpbin/httpbin.wsgi owner=httpbin group=httpbin mode=0644 
Another configuration file, httpbin.conf, is needed for Apache on the remote server to include the WSGI application httpbin running as a virtual host. It goes like this:
<VirtualHost *>
 WSGIDaemonProcess httpbin user=httpbin group=httpbin threads=5
 WSGIScriptAlias / /var/www/httpbin/httpbin.wsgi
 <Directory /var/www/httpbin>
  WSGIProcessGroup httpbin
  WSGIApplicationGroup %{GLOBAL}
  Order allow,deny
  Allow from all
 </Directory>
</VirtualHost>
This file needs to be copied into the folder /etc/apache2/sites-available on the host, which already exists when the apache2 package is installed. The remaining operations to get everything running together are: the default welcome screen of Apache blocks everything else and should be disabled with Apache's CLI tool a2dissite. After that, the new virtual host needs to be activated with the complementary tool a2ensite; both can be run remotely by the module command. Then the Apache server on the remote machine must be restarted to read in the new configuration. You've guessed it already: that's all easy to perform with Ansible:
   - name: deploy configuration script
     copy: src=httpbin.conf dest=/etc/apache2/sites-available owner=root group=root mode=0644
   - name: deactivate default welcome screen
     command: a2dissite 000-default.conf
   - name: activate httpbin virtual host
     command: a2ensite httpbin.conf
   - name: restart Apache
     service: name=apache2 state=restarted 
That's it. After this playbook has been performed completely by Ansible on a freshly set up remote Debian base installation (or on several), the httpbin request server is up and running on the Apache web server, and can be queried from anywhere by a web browser, or for example by curl:
$ curl http://192.0.2.0/user-agent
{
  "user-agent": "curl/7.50.1"
}
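If Python is closer at hand than curl, the same check can be done with the requests library (a small sketch; 192.0.2.0 is the placeholder address used throughout this post):

import requests

# query the /user-agent endpoint of the freshly deployed instance
response = requests.get("http://192.0.2.0/user-agent")
print(response.json())    # e.g. {'user-agent': 'python-requests/2.12.4'}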
With the broad set of Ansible modules and with playbooks, a lot of tasks can be accomplished, like the example problem explained here. But the range of functions of Ansible is even more comprehensive; to discuss all of that would go beyond the scope of this blog post. For example, playbooks offer more advanced features like event handlers, which can be used for recurring operations like the restart of Apache in more extensive projects. And beyond playbooks, templates can be set up in roles which behave differently on selected machine groups; Ansible uses Jinja2 as the template engine for that. The scope of functions of the basic modules can also be expanded by employing external tools. To drop a word on why it could be useful in certain situations to run your own instance of the httpbin request server instead of using the official ones which are provided on the net by Runscope: some people would prefer to run a private instance, for example in the local network, instead of querying the one on the internet. Or, for development reasons, a couple or even a large number of identical instances might be needed; Ansible is ideal for setting them up automatically. Anyway, the Javascript bindings to tracking services like Google Analytics in httpbin/templates/trackingscripts.html are patched out in the Debian package. That could be another reason to prefer to set up your own instance on a Debian server.

15 February 2017

Daniel Stender: APT programming snippets for Debian system maintenance

The Python API for the Debian package manager APT is useful for writing practical system maintenance scripts which go beyond shell scripting capabilities. There are Python 2 and Python 3 libraries for that available as packages, as well as documentation in the package python-apt-doc. If that's also installed, the documentation can be found in /usr/share/doc/python-apt-doc/html/index.html, and there are also a couple of example scripts shipped in /usr/share/doc/python-apt-doc/examples. The libraries mainly consist of Python bindings for the libapt-inst and libapt-pkg C++ core libraries of the APT package manager, which makes processing very fast. Debugging symbols are also available as packages (python{,3}-apt-dbg). The module apt_inst provides features like reading from binary packages, while apt_pkg resembles the functions of the package manager. There is also the apt abstraction layer which provides more convenient access to the library; for example, apt.cache.Cache() can be used to behave like apt-get:
from apt.cache import Cache
mycache = Cache()
mycache.update()                   # apt-get update
mycache.open()                     # re-open
mycache.upgrade(dist_upgrade=True) # apt-get dist-upgrade
mycache.commit()                   # apply

boil out selections

As widely known, there is a feature of dpkg which helps to move a package inventory from one installation to another by just using a text file with a list of installed packages. A selections list containing all installed packages can easily be generated with $ dpkg --get-selections > selections.txt. The resulting file then looks something like this:
$ cat selections.txt
0ad                                 install
0ad-data                            install
0ad-data-common                     install
a2ps                                install
abi-compliance-checker              install
abi-dumper                          install
abigail-tools                       install
accountsservice                     install
acl                                 install
acpi                                install
The counterpart to this operation (--set-selections) can be used to reinstall (add) the complete package inventory on another installation or computer (that needs superuser rights), as explained in the manpage dpkg(1). No problem so far. The problem arises if that list contains a package which can't be found in any of the package inventories set up in /etc/apt/sources.list(.d/) on the target system; then dpkg stops the whole process:
# dpkg --set-selections < selections.txt
dpkg: warning: package not in database at line 524: google-chrome-beta
dpkg: warning: found unknown packages; this might mean the available database
is outdated, and needs to be updated through a frontend method
Thus, manually downloaded and installed "wild" packages from unofficial package sources are problematic for this approach, because the package installer simply doesn't know where to get them. Luckily, dpkg puts out the relevant package names, but instead of removing them manually with an editor, this little Python script for python3-apt automatically deletes any of these packages from a selections file:
#!/usr/bin/env python3
import sys
import apt_pkg
apt_pkg.init()
cache = apt_pkg.Cache()
infile = open(sys.argv[1])
outfile_name = sys.argv[1] + '.boiled'
outfile = open(outfile_name, "w")
for line in infile:
    package = line.split()[0]
    if package in cache:
        outfile.write(line)
infile.close()
outfile.close()
sys.exit(0)
The script takes one argument, which is the name of the selections file that has been generated by dpkg. The low level module apt_pkg first has to be initialized with apt_pkg.init(). Then apt_pkg.Cache() can be used to instantiate a cache object (here: cache). That object supports membership tests, thus it's easy to skip a package from the list if it can't be found in the database: the corresponding line just isn't copied into the outfile (.boiled), while the others are. The result then looks something like this:
$ diff selections.txt selections.txt.boiled 
3780d3779
< python-timemachine   install
4438d4436
< wlan-supercracker    install
That script might also be useful for moving from one distribution or derivative to another (like from Ubuntu to Debian). For production use, the open() calls should of course be secured against FileNotFoundError and IOError to prevent program crashes on such events.
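A minimal sketch of that hardening could look like this (in Python 3 the missing-file case is FileNotFoundError, and IOError is an alias of OSError):

#!/usr/bin/env python3
# sketch: the same filter with basic error handling around the file I/O
import sys
import apt_pkg

apt_pkg.init()
cache = apt_pkg.Cache()
try:
    with open(sys.argv[1]) as infile, \
         open(sys.argv[1] + '.boiled', 'w') as outfile:
        for line in infile:
            package = line.split()[0]
            if package in cache:
                outfile.write(line)
except (FileNotFoundError, IOError) as error:
    sys.exit("cannot process {}: {}".format(sys.argv[1], error))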

purge rc-s

As is also widely known, deinstalled packages leave stuff like configuration files, maintainer scripts and logs on the computer, to preserve them in case the package gets reinstalled at some point in the future. That happens if dpkg has been used with -r/--remove instead of -P/--purge, which also removes these files that would otherwise be left behind. Such packages are then marked as rc in the package database, like:
$ dpkg -l | grep ^rc
rc  firebird2.5-common          2.5.6.27020.ds4-3   amd64   common files for firebird 2.5 servers and clients
rc  firebird2.5-server-common   2.5.6.27020.ds4-3   amd64   common files for firebird 2.5 servers
rc  firebird3.0-common          3.0.1.32609.ds4-8   all     common files for firebird 3.0 server, client and utilities
rc  imagemagick-common          8:6.9.6.2+dfsg-2    all     image manipulation programs -- infrastructure dummy package
These can be purged afterwards to completely remove them from the system. There are several shell coding snippets to be found on the net for completing this job automatically, like this one here:
dpkg -l | grep "^rc" | sed -e "s/^rc //" -e "s/ .*$//" | \
xargs dpkg --purge
The first thing which is needed to handle this in a Python script is the information that in apt_pkg, the package state rc is represented by the code 5:
>>> testpackage = cache['firebird2.5-common']
>>> testpackage.current_state
5
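Recent versions of python-apt also expose named constants for these states, so the magic number can be avoided (assuming the CURSTATE_* constants of current python-apt):

>>> apt_pkg.CURSTATE_CONFIG_FILES
5
>>> testpackage.current_state == apt_pkg.CURSTATE_CONFIG_FILES
True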
For changing things in the database, apt_pkg.DepCache() can be docked onto a cache object to manipulate the installation state of packages within it, like marking a package to be removed or purged:
>>> mydepcache = apt_pkg.DepCache(mycache)
>>> mydepcache.mark_delete(testpackage, True) # True = purge
>>> mydepcache.marked_delete(testpackage)
True
That's basically all that is needed for an old-package purging maintenance script in Python 3; add another iterator as package filter, and there you go:
#!/usr/bin/env python3
import sys
import apt_pkg
from apt.progress.text import AcquireProgress
from apt.progress.base import InstallProgress
acquire = AcquireProgress()
install = InstallProgress()
apt_pkg.init()
cache = apt_pkg.Cache()
depcache = apt_pkg.DepCache(cache)
for paket in cache.packages:
    if paket.current_state == 5:               # 5 = "rc": removed, config files remain
        depcache.mark_delete(paket, True)      # True = purge, not just remove
depcache.commit(acquire, install)
The method DepCache.commit() applies the changes to the package database at the end, and it needs the progress handlers from apt.progress to perform. Of course this script needs superuser rights to run. It then returns something like this:
$ sudo ./rc-purge 
Reading package lists... Done
Building dependency tree
Reading state information... Done
Fetched 0 B in 0s (0 B/s)
custom fork found
got pid: 17984
got pid: 0
got fd: 4
(Reading database ... 701434 files and directories currently installed.)
Purging configuration files for libmimic0:amd64 (1.0.4-2.3) ...
Purging configuration files for libadns1 (1.5.0~rc1-1) ...
Purging configuration files for libreoffice-sdbc-firebird (1:5.2.2~rc2-2) ...
Purging configuration files for vlc-nox (2.2.4-7) ...
Purging configuration files for librlog5v5 (1.4-4) ...
Purging configuration files for firebird3.0-common (3.0.1.32609.ds4-8) ...
Purging configuration files for imagemagick-common (8:6.9.6.2+dfsg-2) ...
Purging configuration files for firebird2.5-server-common (2.5.6.27020.ds4-3)
It's not yet production ready (for example, there's an infinite loop if dpkg returns error code 1, like from "can't remove non empty folder"). But generally, ATTENTION: be very careful with typos and other mistakes if you want to use that code snippet. A faulty script performing changes on the package database might destroy the integrity of your system, and you don't want that to happen.

detect wild packages

As said above, installed Debian packages might be called "wild" if they have been downloaded from somewhere on the net and installed manually, like it is done from time to time on many systems. If you want to remove that whole class of packages again for whatever reason, the question is how to detect them. A characteristic element is that there is no source connected to such a package, and that can be detected by Python scripting, using again the bindings for the APT libraries. The package object doesn't have an associated method to query its source, because the origin is always connected to a specific package version; some specific version might have come from security updates, for example. The candidate version of a package can be queried with DepCache.get_candidate_ver(), which returns a complex apt_pkg.Version object:
>>> import apt_pkg
>>> apt_pkg.init()
>>> mycache = apt_pkg.Cache()
Reading package lists... Done
Building dependency tree
Reading state information... Done
>>> mydepcache = apt_pkg.DepCache(mycache)
>>> testpackage = mydepcache.get_candidate_ver(mycache['nano'])
>>> testpackage
<apt_pkg.Version object: Pkg:'nano' Ver:'2.7.4-1' Section:'editors'  Arch:'amd64' Size:484790 ISize:2092032 Hash:33578 ID:31706 Priority:2>
For version objects there is the attribute file_list available, which holds a list of tuples containing PackageFile() objects:
>>> testpackage.file_list
[(<apt_pkg.PackageFile object: filename:'/var/lib/apt/lists/httpredir.debian.org_debian_dists_testing_main_binary-amd64_Packages'  a=testing,c=main,v=,o=Debian,l=Debian arch='amd64' site='httpredir.debian.org' IndexType='Debian Package Index' Size=38943764 ID:0>, 669901L)]
These file objects point to the index files which are associated with a specific package source (a downloaded package index), and they can be read out easily (using a for loop because there can be multiple file entries):
>>> for files in testpackage.file_list:
...     print(files[0].filename)
/var/lib/apt/lists/httpredir.debian.org_debian_dists_testing_main_binary-amd64_Packages
That explains itself: the nano binary package on this amd64 computer comes from httpredir.debian.org/debian testing main. If a package is wild, that means it was installed manually, so there is no associated index file to be found, but only /var/lib/dpkg/status (libcudnn5 is not in the official package archives but distributed by Nvidia as a .deb package):
>>> testpackage2 = mydepcache.get_candidate_ver(mycache['libcudnn5'])
>>> for files in testpackage2.file_list:
...     print(files[0].filename)
/var/lib/dpkg/status
The simple trick now is to find all packages which have only /var/lib/dpkg/status as associated system file (that doesn't refer to what the packages contain), and not an index file representing a package source. There's a little pitfall: that's true also for virtual packages. But virtual packages commonly don't have an associated version (python-apt docs: "to check whether a package is virtual; that is, it has no versions and is provided at least once"), and that can be queried with Package.has_versions. A filter to find any packages that aren't virtual packages, are solely associated with one system file, and where that file is /var/lib/dpkg/status, then goes like this:
for package in mycache.packages:
    if package.has_versions:
        version = mydepcache.get_candidate_ver(package)
        if len(version.file_list) == 1:
            if 'dpkg/status' in version.file_list[0][0].filename:
                print(package.name)
On my Debian testing system this puts out a quite interesting list. It contains the wild packages like libcudnn5, but also packages which are currently not in testing because they have been temporarily removed by AUTORM due to release critical bugs. Then there's all the obsolete stuff which was installed from the package archives once and then forgotten, like old kernel header packages ("obsolete packages" in dselect). So this snippet brings up other stuff, too; consider it somewhat experimental so far. For convenience, a standalone version of the filter is sketched below.
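A sketch under the same assumptions as the interactive session above:

#!/usr/bin/env python3
# sketch: list installed packages whose only associated index file
# is /var/lib/dpkg/status, i.e. packages without a package source
import apt_pkg

apt_pkg.init()
mycache = apt_pkg.Cache()
mydepcache = apt_pkg.DepCache(mycache)

for package in mycache.packages:
    if package.has_versions:
        version = mydepcache.get_candidate_ver(package)
        if len(version.file_list) == 1 \
           and 'dpkg/status' in version.file_list[0][0].filename:
            print(package.name)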

1 January 2017

Joey Hess: p2p dreams

In one of the good parts of the very mixed bag that is "Lo and Behold: Reveries of the Connected World", Werner Herzog asks his interviewees what the Internet might dream of, if it could dream. The best answer he gets is along the lines of: The Internet of before dreamed a dream of the World Wide Web. It dreamed some nodes were servers, and some were clients. And that dream became current reality, because that's the essence of the Internet. Three years ago, it seemed like perhaps another dream was developing post-Snowden, of dissolving the distinction between clients and servers, connecting peer-to-peer using addresses that are also cryptographic public keys, so authentication and encryption and authorization are built in. Telehash is one hopeful attempt at this, others include snow, cjdns, i2p, etc. So far, none of them seem to have developed into a widely used network, although any of them still might get there. There are a lot of technical challenges due to the current Internet dream/nightmare, where the peers on the edges have multiple barriers to connecting to other peers. But, one project has developed something similar to the new dream, almost as a side effect of its main goals: Tor's onion services. I'd wanted to use such a thing in git-annex, for peer-to-peer sharing and syncing of git-annex repositories. On November 13th, I started building it, using Tor, and I'm releasing it concurrently with this blog post.
git-annex's Tor support replaces its old hack of tunneling git protocol over XMPP. That hack was unreliable (it needed a TCP on top of XMPP layer) but worse, the XMPP server could see all the data being transferred. And, there are fewer large XMPP servers these days, so fewer users could use it at all. If you were using XMPP with git-annex, you'll need to switch to either Tor or a server accessed via ssh.
Now git-annex can serve a repository as a Tor onion service, and that can then be accessed as a git remote, using an url like tor-annex::tungqmfb62z3qirc.onion:42913. All the regular git, and git-annex commands, can be used with such a remote. Tor has a lot of goals for protecting anonymity and privacy. But the important things for this project are just that it has end-to-end encryption, with addresses that are public keys, and allows P2P connections. Building an anonymous file exchange on top of Tor is not my goal -- if you want that, you probably don't want to be exchanging git histories that record every edit to the file and expose your real name by default. Building this was not without its difficulties. Tor onion services were originally intended to run hidden websites, not to connect peers to peers, and this kind of shows.. Tor does not cater to end users setting up lots of Onion services. Either root has to edit the torrc file, or the Tor control port can be used to ask it to set one up. But, the control port is not enabled by default, so you still need to su to root to enable it. Also, it's difficult to find a place to put the hidden service's unix socket file that's writable by a non-root user. So I had to code around this, with a git annex enable-tor that su's to root and sets it all up for you.
One interesting detail about the implementation of the P2P protocol in git-annex is that it uses two Free monads to build up actions. There's a Net monad which can be used to send and receive protocol messages, and a Local monad which allows only the necessary modifications to files on disk. Interpreters for Free monad actions can chose exactly which actions to allow for security reasons. For example, before a peer has authenticated, the P2P protocol is being run by an interpreter that refuses to run any Local actions whatsoever. Other interpreters for the Net monad could be used to support other network transports than Tor.
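To give a rough flavour of that design (a loose Python analogy only; git-annex is written in Haskell and uses real Free monads), protocol steps can be modelled as plain data, with the interpreter, not the protocol code, deciding what actually runs:

# toy analogy, not git-annex code: protocol actions are data objects,
# and the interpreter chooses which kinds of action it will execute
class SendMessage:                      # a "Net" action
    def __init__(self, msg):
        self.msg = msg

class WriteFile:                        # a "Local" action
    def __init__(self, path, data):
        self.path, self.data = path, data

def pre_auth_interpreter(action):
    # before authentication: allow Net actions, refuse Local ones
    if isinstance(action, WriteFile):
        raise PermissionError("Local actions refused before auth")
    print("sending:", action.msg)       # stand-in for real network I/O

def handshake():
    # protocol code only yields actions; it cannot touch the disk itself
    yield SendMessage("AUTH <token>")

for action in handshake():
    pre_auth_interpreter(action)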
When two peers are connected over Tor, one knows it's talking to the owner of a particular onion address, but the other peer knows nothing about who's talking to it, by design. This makes authentication harder than it would be in a P2P system with a design like Telehash. So git-annex does its own authentication on top of Tor. With authentication, users would need to exchange absurdly long addresses (over 150 characters) to connect their repositories. One very convenient thing about using XMPP was that a user would have connections to their friend's accounts, so it was easy to share with them. Exchanging long addresses is too hard. This is where Magic Wormhole saved the day. It's a very elegant way to get any two peers in touch with each other, and the users only have to exchange a short code phrase, like "2-mango-delight", which can only be used once. Magic Wormhole makes some security tradeoffs for this simplicity. It's got vulnerabilities to DOS attacks, and its MITM resistance could be improved. But I'm lucky it came along just in time. So, it takes only installing Tor and Magic Wormhole, running two git-annex commands, and exchanging short code phrases with a friend, perhaps over the phone or in an encrypted email, to get your git-annex repositories connected and syncing over Tor. See the documentation for details. Also, the git-annex webapp allows setting the same thing up point-and-click style. The Tor project blog has throughout December been featuring all kinds of projects that are using Tor. Consider this a late bonus addition to that. ;) I hope that Tor onion services will continue to develop to make them easier to use for peer-to-peer systems. We can still dream a better Internet.
This work was made possible by all my supporters on Patreon.

20 December 2016

Reproducible builds folks: Reproducible Builds: week 86 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday December 11 and Saturday December 17 2016:

Reproducible builds world summit

The 2nd Reproducible Builds World Summit was held in Berlin, Germany on December 13th-15th. The event was a great success with enthusiastic participation from an extremely diverse number of projects. Many thanks to our sponsors for making this event possible!

[photo: Reproducible Summit 2 in Berlin 2016]

Whilst there is an in-depth report forthcoming, the Guix project have already released their own report.

Media coverage

Reproducible work in other projects

Documentation update

A large number of revisions were made to the website during the summit, including re-structuring existing content and creating a concrete plan to move the wiki content to the website.

Elsewhere in Debian

Packages reviewed and fixed, and bugs filed

Chris Lamb: Daniel Shahaf: Reiner Herrmann:

Reviews of unreproducible packages

9 package reviews have been added, 19 have been updated and 17 have been removed in this week, adding to our knowledge about identified issues. 3 issue types have been added, and one issue type was updated.

Weekly QA work

During our reproducibility testing, some FTBFS bugs have been detected and reported by:

diffoscope development

reprotest development

trydiffoscope development

Misc.

This week's edition was written by Chris Lamb and reviewed by a bunch of Reproducible Builds folks on IRC and via email.

8 December 2016

Dirk Eddelbuettel: RcppAPT 0.0.3

A new version of RcppAPT -- our interface from R to the C++ library behind the awesome apt, apt-get, apt-cache, ... commands and their cache powering Debian, Ubuntu and the like -- is now on CRAN. We changed the package to require C++11 compilation as newer Debian systems with g++-6 and the current libapt-pkg-dev library cannot build under the C++98 standard which CRAN imposes (and let's not get into why ...). Once set to C++11 we have no issues. We also added more examples to the manual pages, and turned on code coverage. A bit more information about the package is available here as well as at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

30 November 2016

Chris Lamb: Free software activities in November 2016

Here is my monthly update covering what I have been doing in the free software world (previous month):
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to permit verification that no flaws have been introduced either maliciously or accidentally during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

This month:

My work in the Reproducible Builds project was also covered in our weekly reports. (#80, #81, #82 & #83)

Toolchain issues

I submitted the following patches to fix reproducibility-related toolchain issues with Debian:

strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.


jenkins.debian.net

jenkins.debian.net runs our comprehensive testing framework.

  • buildinfo.debian.net has moved to SSL. (ac3b9e7)
  • Submit signing keys to keyservers after generation. (bdee6ff)
  • Various cosmetic changes, including
    • Prefer if X not in Y over if not X in Y. (bc23884)
    • No need for a dictionary; let's just use a set. (bf3fb6c)
    • Avoid DRY violation by using a for loop. (4125ec5)

I also submitted 9 patches to fix specific reproducibility issues in apktool, cairo-5c, lava-dispatcher, lava-server, node-rimraf, perlbrew, qsynth, tunnelx & zp.

Debian

Debian LTS This month I have been paid to work 11 hours on Debian Long Term Support (LTS). In that time I did the following:
  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 697-1 for bsdiff fixing an arbitrary write vulnerability.
  • Issued DLA 705-1 for python-imaging correcting a number of memory overflow issues.
  • Issued DLA 713-1 for sniffit where a buffer overflow allowed a specially-crafted configuration file to provide a root shell.
  • Issued DLA 723-1 for libsoap-lite-perl preventing a Billion Laughs XML expansion attack.
  • Issued DLA 724-1 for mcabber fixing a roster push attack.

Uploads
  • redis:
    • 3.2.5-2 Tighten permissions of /var/{lib,log}/redis. (#842987)
    • 3.2.5-3 & 3.2.5-4 Improve autopkgtest tests and install upstream's MANIFESTO and README.md documentation.
  • gunicorn (19.6.0-9) Adding autopkgtest tests.
  • libfiu:
    • 0.94-1 Add autopkgtest tests.
    • 0.95-1, 0.95-2 & 0.95-3 New upstream release and improve autopkgtest coverage.
  • python-django (1.10.3-1) New upstream release.
  • aptfs (0.8-3, 0.8-4 & 0.8-5) Adding and subsequently improving the autopkgtest tests.


I performed the following QA uploads:


Finally, I also made the following non-maintainer uploads:
  • libident (0.22-3.1) Move from obsolete Source-Version substvar to binary:Version. (#833195)
  • libpcl1 (1.6-1.1) Move from obsolete Source-Version substvar to binary:Version. (#833196)
  • pygopherd (2.0.18.4+nmu1) Move from obsolete Source-Version substvar to ${source:Version}. (#833202)


RC bugs


I also filed 59 FTBFS bugs against arc-gui-clients, asyncpg, blhc, civicrm, d-feet, dpdk, fbpanel, freeciv, freeplane, gant, golang-github-googleapis-gax-go, golang-github-googleapis-proto-client-go, haskell-cabal-install, haskell-fail, haskell-monadcatchio-transformers, hg-git, htsjdk, hyperscan, jasperreports, json-simple, keystone, koji, libapache-mod-musicindex, libcoap, libdr-tarantool-perl, libmath-bigint-gmp-perl, libpng1.6, link-grammar, lua-sql, mediatomb, mitmproxy, ncrack, net-tools, node-dateformat, node-fuzzaldrin-plus, node-nopt, open-infrastructure-system-images, open-infrastructure-system-images, photofloat, ppp, ptlib, python-mpop, python-mysqldb, python-passlib, python-protobix, python-ttystatus, redland, ros-message-generation, ruby-ethon, ruby-nokogiri, salt-formula-ceilometer, spykeviewer, sssd, suil, torus-trooper, trash-cli, twisted-web2, uftp & wide-dhcpv6.

FTP Team

As a Debian FTP assistant I ACCEPTed 70 packages: bbqsql, coz-profiler, cross-toolchain-base, cross-toolchain-base-ports, dgit-test-dummy, django-anymail, django-hstore, django-html-sanitizer, django-impersonate, django-wkhtmltopdf, gcc-6-cross, gcc-defaults, gnome-shell-extension-dashtodock, golang-defaults, golang-github-btcsuite-fastsha256, golang-github-dnephin-cobra, golang-github-docker-go-events, golang-github-gogits-cron, golang-github-opencontainers-image-spec, haskell-debian, kpmcore, libdancer-logger-syslog-perl, libmoox-buildargs-perl, libmoox-role-cloneset-perl, libreoffice, linux-firmware-raspi3, linux-latest, node-babel-runtime, node-big.js, node-buffer-shims, node-charm, node-cliui, node-core-js, node-cpr, node-difflet, node-doctrine, node-duplexer2, node-emojis-list, node-eslint-plugin-flowtype, node-everything.js, node-execa, node-grunt-contrib-coffee, node-grunt-contrib-concat, node-jquery-textcomplete, node-js-tokens, node-json5, node-jsonfile, node-marked-man, node-os-locale, node-sparkles, node-tap-parser, node-time-stamp, node-wrap-ansi, ooniprobe, policycoreutils, pybind11, pygresql, pysynphot, python-axolotl, python-drizzle, python-geoip2, python-mockupdb, python-pyforge, python-sentinels, python-waiting, pythonmagick, r-cran-isocodes, ruby-unicode-display-width, suricata & voctomix-outcasts. I additionally filed 4 RC bugs against packages that had incomplete debian/copyright files against node-cliui, node-core-js, node-cpr & node-grunt-contrib-concat.

2 November 2016

Markus Koschany: My Free Software Activities in October 2016

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you're interested in Android, Java, Games and LTS topics, this might be interesting for you.

Debian Android

Debian Games

Debian Java

Debian LTS

This was my eighth month as a paid contributor and I have been paid to work 13 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

Non-maintainer uploads

QA

31 October 2016

Enrico Zini: Links for November 2016

So You Want To Learn Physics... [archive]
This post is a condensed version of what I've sent to people who have contacted me over the years, outlining what everyone needs to learn in order to really understand physics.
Operation Tamarisk [archive]
Operation Tamarisk was a Cold War-era operation run by the military intelligence services of the U.S., U.K., and France through their military liaison missions in East Germany, that gathered discarded paper, letters, and garbage from Soviet trash bins and military maneuvers, including used toilet paper.
Mortara case
Of how in Bologna, where I live, in the 1850s/1860s, when my great-great-granddad lived, the Papal State seized a child from a Jewish family on the basis of a former servant's testimony that she had administered emergency baptism to the boy when he fell sick as an infant.

29 October 2016

Jaldhar Vyas: Dawkins Weasel

Happy Dhanteras from Bappy Lahiri
It's already Dhan Terash, so I'd better pick up the pace if I want to beat my blogging challenge before Diwali; in this post I'll discuss a program I wrote earlier this year.
I dread to look up anything on Wikipedia because I always end up going down a rabbit hole and surfacing hours later on a totally unrelated topic. Case in point: some months ago, I ended up on the page of the title. This is an interesting little experiment illustrating how random mutation combined with cumulative selection can result in the evolution of a specific form. The algorithm is:

  1. Start with a random string of 28 characters.
  2. Make 100 copies of this string, with a 5% chance per character of that character being replaced with a random character.
  3. Compare each new string with "METHINKS IT IS LIKE A WEASEL", and give each a score (the number of letters in the string that are correct and in the correct position).
  4. If any of the new strings has a perfect score (== 28), halt.
  5. Otherwise, take the highest scoring string, and go to step 2.
I had to try this myself so I wrote a little implementation in C++. A sample run looks like this:
  
$ ./weasel
0000 DNCFICBLUZVC JF KKNVJJASCJRW (0)
0001 DNIFICOLUZVC JFLIKNVAJASCJEW (6)
0002 DNNWICKSUZVCRSFLIKNVA ASCJEL (11)
0003 DNNWICKSUZVCRSFLIKNVA ASCJEL (11)
0004 MNNVICKSQZVCRSFLIKNVA WSCJEL (13)
0005 MENVICKSQZVCRSFLIKNVA WSCJEL (14)
0006 MENVISKS ZTCRSFLIKNVA WLCJEL (16)
0007 MENVISKS ZTCRSFLIKNVA WLCJEL (16)
0008 MEDHISKS ZTCISFLIKNVA WLCJEL (18)
0009 MEDHISKS ZTCISFLIKNVA WLCJEL (18)
0010 MEDHISKS ZTCISFLIKNVA WLCJEL (18)
0011 MEDHISKS ZTCIS LIKTKA WLCZEL (19)
0012 MEDHISKS ZTCIS LIKTKA WLCZEL (19)
0013 MEDHISKS ZTCIS LIKT A WLCZEL (20)
0014 MEDHISKS ZTCIS LIKT A WLCZEL (20)
0015 MEDHISKS ZTCIS LIKE A WLAZEL (22)
0016 MEDHIGKS ITCIS LIKE A WLAZEL (23)
0017 MEDHIGKS ITCIS LIKE A WLAZEL (23)
0018 MEDHIGKS ITCIS LIKE A WLAZEL (23)
0019 MEDHIGKS ITCIS LIKE A WLAZEL (23)
0020 MEDHIGKS ITCIS LIKE A WLAZEL (23)
0021 MEDHIGKS ITCIS LIKE A WLAZEL (23)
0022 METHINKS ITCIS LIKE A WLASEL (26)
0023 METHINKS ITCIS LIKE A WLASEL (26)
0024 METHINKS ITCIS LIKE A WLASEL (26)
0025 METHINKS ITCIS LIKE A WEASEL (27)
0026 METHINKS ITCIS LIKE A WEASEL (27)
0027 METHINKS ITCIS LIKE A WEASEL (27)
0028 METHINKS ITCIS LIKE A WEASEL (27)
0029 METHINKS ITCIS LIKE A WEASEL (27)
0030 METHINKS ITCIS LIKE A WEASEL (27)
0031 METHINKS ITCIS LIKE A WEASEL (27)
0032 METHINKS ITCIS LIKE A WEASEL (27)
0033 METHINKS ITCIS LIKE A WEASEL (27)
0034 METHINKS ITCIS LIKE A WEASEL (27)
0035 METHINKS ITCIS LIKE A WEASEL (27)
0036 METHINKS ITCIS LIKE A WEASEL (27)
0037 METHINKS ITCIS LIKE A WEASEL (27)
0038 METHINKS ITCIS LIKE A WEASEL (27)
0039 METHINKS ITCIS LIKE A WEASEL (27)
0040 METHINKS ITCIS LIKE A WEASEL (27)
0041 METHINKS ITCIS LIKE A WEASEL (27)
0042 METHINKS ITCIS LIKE A WEASEL (27)
0043 METHINKS ITCIS LIKE A WEASEL (27)
0044 METHINKS ITCIS LIKE A WEASEL (27)
0045 METHINKS ITCIS LIKE A WEASEL (27)
0046 METHINKS ITCIS LIKE A WEASEL (27)
0047 METHINKS ITCIS LIKE A WEASEL (27)
0048 METHINKS ITCIS LIKE A WEASEL (27)
0049 METHINKS ITCIS LIKE A WEASEL (27)
0050 METHINKS ITCIS LIKE A WEASEL (27)
0051 METHINKS ITCIS LIKE A WEASEL (27)
0052 METHINKS ITCIS LIKE A WEASEL (27)
0053 METHINKS ITCIS LIKE A WEASEL (27)
0054 METHINKS IT IS LIKE A WEASEL (28)

My program lets you adjust the input string, the number of copies, and the mutation threshold. I also thought it might be interesting to implement the Generator design pattern. In C++ this is done by making a class which implements begin() and end() methods and at least a forward iterator. You can find the source code on Github.
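For readers who would rather prototype than compile, the same algorithm also fits into a few lines of Python (a sketch, not the C++ implementation from the repository):

#!/usr/bin/env python3
# weasel sketch: mutate 100 copies per generation and keep the best one
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = string.ascii_uppercase + " "

def score(candidate):
    # number of characters that are correct and in the correct position
    return sum(a == b for a, b in zip(candidate, TARGET))

parent = "".join(random.choice(CHARS) for _ in TARGET)
generation = 0
print("%04d %s (%d)" % (generation, parent, score(parent)))
while score(parent) < len(TARGET):
    copies = ["".join(random.choice(CHARS) if random.random() < 0.05 else c
                      for c in parent)
              for _ in range(100)]
    parent = max(copies, key=score)      # highest scoring string survives
    generation += 1
    print("%04d %s (%d)" % (generation, parent, score(parent)))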

24 October 2016

Reproducible builds folks: Reproducible Builds: week 78 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday October 16 and Saturday October 22 2016:

Media coverage

Upcoming events

buildinfo.debian.net

In order to build packages reproducibly, you not only need identical sources but also some external definition of the environment used for a particular build. This definition includes the inputs and the outputs and, in the Debian case, is available in a ${package}_${architecture}_${version}.buildinfo file. We anticipate the next dpkg upload to sid will create .buildinfo files by default. Whilst it's clear that we also need to teach dak to deal with them (#763822), it's not actually clear how to handle .buildinfo files after dak has processed them and how to make them available to the world. To this end, Chris Lamb has started development on a proof-of-concept .buildinfo server to see what issues arise.

Source

Reproducible work in other projects

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

99 package reviews have been added, 3 have been updated and 6 have been removed in this week, adding to our knowledge about identified issues. 6 issue types have been added.

Weekly QA work

During our reproducibility testing, some FTBFS bugs have been detected and reported by:

diffoscope development

tests.reproducible-builds.org

Misc.

Our poll to find a good time for an IRC meeting is still running until Tuesday, October 25th; please reply as soon as possible. We need a logo! Some ideas and requirements for a Reproducible Builds logo have been documented in the wiki. Contributions very welcome, even if simply by forwarding this information. This week's edition was written by Chris Lamb & Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC.

8 October 2016

Norbert Preining: Debian/TeX update October 2016: all of TeX Live and Biber 2.6

Finally a new update of many TeX related packages: all the texlive-* packages, including the binary packages, and biber have been updated to the latest release. This upload was delayed by my travels around the world, as well as by the necessity to package a new Perl module (libdatetime-calendar-julian-perl) required by the new Biber. Also, my new job leaves me only the weekends for packaging. Anyway, the packages are now uploaded and should appear soon on your friendly local server.

There are several highlights: the binaries have been patched with several upstream fixes (tex4ht and XeTeX compatibility, as well as various Japanese TeX engine fixes), there are an updated biber and biblatex, and as usual loads of new and updated packages. Last but not least I want to thank one particular author: his package was removed from TeX Live due to the addition of a rather unusual clause in the license. Instead of simply uploading new packages to Debian with the rather important package removed, I contacted the author and asked for clarification. And to my great pleasure he immediately answered with an update of the package with a fixed license. All of us users of these many packages should be grateful to the authors of the packages, who invest loads of their free time into supporting our community. Thanks! Enough now; here is, as usual, the list of new and updated packages with links to their respective CTAN pages. Enjoy.

New packages: addfont, apalike-german, autoaligne, baekmuk, beamerswitch, beamertheme-cuerna, beuron, biblatex-claves, biolett-bst, cooking-units, cstypo, emf, eulerpx, filecontentsdef, frederika2016, grant, latexgit, listofitems, overlays, phonenumbers, pst-arrow, quicktype, revquantum, richtext, semantic-markup, spalign, texproposal, tikz-page, unfonts-core, unfonts-extra, uspace.

Updated packages: achemso, acmart, acro, adobemapping, alegreya, allrunes, animate, arabluatex, archaeologie, asymptote, attachfile, babel-greek, bangorcsthesis, beebe, biblatex, biblatex-anonymous, biblatex-apa, biblatex-bookinother, biblatex-chem, biblatex-fiwi, biblatex-gost, biblatex-ieee, biblatex-manuscripts-philology, biblatex-morenames, biblatex-nature, biblatex-opcit-booktitle, biblatex-phys, biblatex-realauthor, biblatex-science, biblatex-true-citepages-omit, bibleref, bidi, chemformula, circuitikz, cochineal, colorspace, comment, covington, cquthesis, ctex, drawmatrix, ejpecp, erewhon, etoc, exsheets, fancyhdr, fei, fithesis, footnotehyper, fvextra, geschichtsfrkl, gnuplottex, gost, gregoriotex, hausarbeit-jura, ijsra, ipaex, jfontmaps, jsclasses, jslectureplanner, latexdiff, leadsheets, libertinust1math, luatexja, markdown, mcf2graph, minutes, multirow, mynsfc, nameauth, newpx, newtxsf, notespages, optidef, pas-cours, platex, prftree, pst-bezier, pst-circ, pst-eucl, pst-optic, pstricks, pstricks-add, refenums, reledmac, rsc, shdoc, siunitx, stackengine, tabstackengine, tagpair, tetex, texlive-es, texlive-scripts, ticket, translation-biblatex-de, tudscr, turabian-formatting, updmap-map, uplatex, xebaposter, xecjk, xepersian, xpinyin. Enjoy.

2 October 2016

Gregor Herrmann: RC bugs 2016/38-39

the last two weeks have seen the migration of perl 5.24 into testing; most of the bugs I worked on were related to it. additionally a few more build dependencies on tzdata were needed. here's the list:

15 August 2016

Shirish Agarwal: The road to TOR

Happy Independence Day to all. I had been looking forward to this day so I can use it to share with my brothers and sisters what little I know about TOR. Independence means so many things to many people. For me, it means having freedom, valuing it and using it to benefit not just ourselves but people at large. And for that to happen, at least on the web, it has to rise above censorship if we are to get there at all. I am 40 years old, and if I can't read whatever I want to read without asking the state-military-corporate trinity, then be damned with that. Debconf was instrumental, as I was able to understand and share many of the privacy concerns that we all have. This blog post is partly a tribute to being part of a community and being part of Debconf16. So, in that search for privacy a couple of years ago, I came across TOR. TOR stands for The Onion Router project. Explaining tor is simple. Let us take the standard way in which we approach a website using a browser or any other means. a. We type out a site name, say debian.org, in the URL/URI bar.
b. Now the first thing the browser would do is look into its DNS cache to see if the name/URL has been used before. If it is something like debian.org which has been used before, and the cached entry is still *fresh*, it would be served from the cache there itself.
c. If it's not there, or the content is stale, the browser generates a DNS lookup through the various routing tables till the IP address for the name is found, and the information is relayed to the browser.
d. The browser takes the IP address and opens a TCP connection to the server; the handshake happens, and after that it's business as usual.
e. In case it doesn't work, you could get errors like "Could not connect to server xyz" or some special errors with error codes. This is a much simplified version of what normally happens with most/all of the browsers. One good way to see how the whole thing works is to use traceroute and the whois service, for example:
[$] traceroute debian.org
and then:
[$] whois 5.153.231.4 | grep inetnum
inetnum: 5.153.231.0 - 5.153.231.255
Just using whois <IP address> gives much more. I just shared a short version because I find it interesting that Debian has booked that whole block of 256 IP addresses, but speculating on that would probably be a job for a different day. Now the differences when using TOR are these: a. The conversation is encrypted (somewhat like using https, but encrypted through the relays)
b. The conversation is relayed over 2-3 relays and it will give a somewhat different identification to the DNS server at the other end.
c. It is only at the end-points that the conversation will be in plain text. For example, the TOR connection I'm using at the moment goes from me → France (relay) → Switzerland (relay) → Germany (relay) → WordPress.com. So WordPress thinks that the whole connection is happening via Germany while I'm here in India. It would also tell them that I'm running some version of MS-Windows and a different browser, while I'm actually somewhere in India, on Debian, using another browser altogether. There are various motivations for doing that. For myself, I'm just a private person and do not need or want that any other person/s or even the State should be looking over my shoulder as to what I'm doing. And the argument that we need to spy on citizens because terrorists are there doesn't hold water with me. There are many ways in which they can pass messages even without tor or the web. The government-corporate-military trinity just gets more powerful if and when they know what common people think, do, eat etc. So the question is how do you install tor if you are a private sort of person. If you are on a Debian machine, you are one step closer to doing that. The first thing that you need to do is install the following:
$ sudo aptitude install ooniprobe python-certifi tor tor-geoipdb torsocks torbrowser-launcher
Once the above is done, run torbrowser-launcher. This is how it works out the first time it is run:
[$] torbrowser-launcher
Tor Browser Launcher
By Micah Lee, licensed under MIT
version 0.2.6
https://github.com/micahflee/torbrowser-launcher
Creating GnuPG homedir /home/shirish/.local/share/torbrowser/gnupg_homedir
Downloading and installing Tor Browser for the first time.
Downloading https://dist.torproject.org/torbrowser/update_2/release/Linux_x86_64-gcc3/x/en-US
Latest version: 6.0.3
Downloading https://dist.torproject.org/torbrowser/6.0.3/tor-browser-linux64-6.0.3_en-US.tar.xz.asc
Downloading https://dist.torproject.org/torbrowser/6.0.3/tor-browser-linux64-6.0.3_en-US.tar.xz
Verifying signature
Extracting tor-browser-linux64-6.0.3_en-US.tar.xz
Running /home/shirish/.local/share/torbrowser/tbb/x86_64/tor-browser_en-US/start-tor-browser.desktop
Launching './Browser/start-tor-browser --detach'...
As can be seen above, you basically download the tor browser remotely from the website; obviously, outgoing web access (HTTP/HTTPS) is needed for this. One of the more interesting things is that it tells you where it installs the browser, /home/shirish/.local/share/torbrowser/tbb/x86_64/tor-browser_en-US/Browser/start-tor-browser, and then detaches. The first time the TOR browser actually runs, it looks something like this:
[screenshot: Torbrowser]
Additionally it gives you 4 choices. Depending on your need for safety, security and convenience, you make a choice and live with it. Now the only thing remaining is to have an alias for your torbrowser, so I made:
[$] alias tor='/home/shirish/.local/share/torbrowser/tbb/x86_64/tor-browser_en-US/Browser/start-tor-browser'
It is suggested that you do not use the same usernames on the onion network. Also, apart from regular URL addresses such as flossexperiences.wordpress.com, you will see sites such as https://www.abc12defgh3ijkl.onion.to (a fictional address). Now there would be others who would want to use the same/similar settings as in their Mozilla Firefox installation. To do that, take the following steps: a. First close down both Torbrowser and Mozilla Firefox.
b. Open your file browser and go to where your Mozilla profile details are; in typical Debian installations this is at ~/.mozilla/firefox/5r7t1r92.default. In the next tab, navigate to ~/.local/share/torbrowser/tbb/x86_64/tor-browser_en-US/Browser/TorBrowser/Data/Browser/profile.default
c. Now copy the following files over from your Mozilla profile to your Tor Browser profile and you can resume where you left off (a small script automating the copy is sketched after the lists below):
    cert8.db
    chromeappsstore.sqlite
    content-prefs.sqlite
    cookies.sqlite
    formhistory.sqlite
    key3.db
    logins.json (Firefox 32 and above)
    mimeTypes.rdf
    permissions.sqlite
    persdict.dat
    places.sqlite
    signons3.txt (if exists)
    webappsstore.sqlite
and the following folders/directories
    bookmarkbackups
    chrome (if it exists)
    searchplugins (if it exists)
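If you would rather script step c., a minimal Python sketch could look like this (the two paths are the example locations from this post and must be adapted; files that don't exist are simply skipped):

#!/usr/bin/env python3
# sketch: copy selected Firefox profile data into the Tor Browser profile
import shutil
from pathlib import Path

SRC = Path.home() / ".mozilla/firefox/5r7t1r92.default"
DST = (Path.home() / ".local/share/torbrowser/tbb/x86_64"
       / "tor-browser_en-US/Browser/TorBrowser/Data/Browser/profile.default")

FILES = ["cert8.db", "chromeappsstore.sqlite", "content-prefs.sqlite",
         "cookies.sqlite", "formhistory.sqlite", "key3.db", "logins.json",
         "mimeTypes.rdf", "permissions.sqlite", "persdict.dat",
         "places.sqlite", "signons3.txt", "webappsstore.sqlite"]
FOLDERS = ["bookmarkbackups", "chrome", "searchplugins"]

for name in FILES:
    if (SRC / name).exists():                  # some files are optional
        shutil.copy2(SRC / name, DST / name)
for name in FOLDERS:
    if (SRC / name).is_dir() and not (DST / name).exists():
        shutil.copytree(SRC / name, DST / name)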
Once the above is done, fire up your torbrowser with the alias shared above. The alias usually goes into your .bashrc file or, depending on which shell you use, wherever its config file is. Welcome to the world of TOR. Now, after a time, if you benefit from tor and would like to give back to the tor community, you should look up tor bridges and relays. As this blog post has become long enough, I will end it now; hopefully we can talk about tor bridges and relays some other day.
Filed under: Miscellenous Tagged: #anonymity, #Debconf16, #debian, #tor, #torbrowser, GNU, Linux, Privacy

1 August 2016

Shirish Agarwal: Doha and the past year in APT

A week has gone by, so here is another small sharing about Doha, and about one package that quite a few of us use every day but don't think much about. Let's start with Doha, with these two pictures which share a bit of what the Doha of today is like:

qatar-1 qatar-2

While I have more than a dozen snapshots of Doha, all of them show the same thing: huge skyscrapers. Overall, Doha seems to be aping Dubai and is in a frenzy as World Cup 2022 is around the corner. We did see a few of the older places, but these seemed to be done up more for tourists than to be the real thing. We saw things like this:

wooden_ship

This picture was taken by Ritesh Raj Saraff, a friend and a DD whom I met while travelling to DebConf. The place where it was taken is known as a souk, what we would call a market-place; this one traded in spices. Quite a few of the spices that we get and use in India were bought from the Middle East in olden times. In fact, it has been argued that the whole Mughlai cuisine that is part of Indian culture was imported from the Middle East when we were trading with them, before India, or Akhand Bharat, was invaded.

What was interesting to both of us was that most of the buildings had a sort of fakeness to them: they tried to show a lot of detailed work, but we could see it was all done recently, so not as old as we were being led to believe. Another interesting bit we learnt throughout our stay in Qatar was that 80-90% of the staff we met, inside Qatar airport as well as in the souk, were people from the Asian subcontinent, more specifically from South India. From a few interesting conversations with the people managing the shops, it turned out that almost all of them were employees, while the owners were Qataris. I could understand this, as the flight between Qatar and India is hardly 3 hours; it seemed very similar to how Mexicans look for work in the United States. The most expensive thing there, other than housing, was water, as it's a desert, and most workers seemed to share accommodation, anywhere between 5 and 15 people to one room. It's probably only the relative strength of the Qatari Rial which compels them to be there. The temperature was around 45 degrees, with a bit of humidity as it's next to the ocean. For all the money in the world, I wouldn't work there. It is true that you know your own city's worth only when you go outside :) I do have some more stories about Qatar, but those will have to wait for another day. I also don't really want to say much more about this part, as it is partly depressing, but I will probably explore it a bit in a further blog post.

One of the more interesting talks I attended was the apt talk. There are several tools in the Debian world, i.e. apt, aptitude, apt-get, dpkg and dselect. More often than not, people know aptitude and apt-get, whereas the rest are not thought about so much. What I had somewhat suspected about the history of apt was revealed to be true, courtesy David K.:

julias-andreas-klose-year-in-apt

You can see the talk/video about apt at http://meetings-archive.debian.net/pub/debian-meetings/2016/debconf16/The_past_year_in_APT.webm. I had been curious about apt, libapt, dpkg and the entire tool-chain which goes into updating packages and the like.
I had a couple of conversations here in India before, on mail, in person and on IRC, as well as a couple in South Africa before the APT talk, where I was told that packages are not signed, or that it is not easy to verify their integrity. Being a Debian fan-boy I could not believe this to be true. Hence I asked, and to my dismay found it to be true. I then asked the same question, with a bit more background, on the mailing list as well, and got to know that this has been a concern since 2005. I do not have the requisite skills myself; a person taking this up would probably need knowledge of dpkg internals, as well as good enough social skills to have at least 1-2 DDs help her/him work on it, and probably some server space where even a partial archive is rebuilt using Debian packages signed with dpkg-sig. I also had some concerns that even if somebody did the work, it might get in the way of the reproducible builds effort, but Neils shared ways in which that could be overcome. Having said the above, it is totally doable if somebody has the will, skills and patience. Just look at the amazing work done by the team which rebuilt almost all of the archive using clang; see clang.debian.net for what they have done.

Now, one of the issues which comes in the way of popularising Debian, or in fact any free software distribution, in India is bandwidth: the lack of it, and how expensive it is. The situation, for lack of a better term, is pathetic. Nothing much can be done until the Government stops giving limited-term oligopoly licenses to telecom operators who operate as a cabal (a closed group where decisions and policies are made without the knowledge of, or input from, other stakeholders), but we need to find ways to make the best of the situation. There are some ideas to tackle that, but that's a long-term goal, and I will share some aspects of it in another blog post.

In the interim, some things can definitely be made better. One of the issues for most people is getting package updates. Before updating the packages, the package index needs to be updated, and both in home and work environments most people are cautious about doing that. Many times, either due to bandwidth issues or something else outside your control, your package index gets corrupted. I have put the possible reasons why and how package index corruption takes place, and a probable work-around, in a post to the deity mailing list. I hope to put it into a more coherent state by filing smaller bug reports, so they can be tackled or answered one by one. Any improvement here helps the stability of the Debian infrastructure. If anybody does the required work and needs a guinea pig for testing, count me in; just holler and share that you will be working on this, and at least one of my workstations will definitely take part in seeing whether it's better or not. Even if you are only able to provide a way to make a copy of /var/lib/apt/lists after every successful update, compare time-stamps on the next run, and only change the copy when a successful update occurs, that would be a huge help in itself (a rough sketch of this idea follows below). I look forward to hearing from one and all.
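To make that sketch concrete, here is a rough shell version of the copy-and-restore idea (run as root). The backup location is an illustrative choice, and this is only the workaround described above, not an official apt feature:

#!/bin/sh
# Sketch: keep a known-good copy of the apt package index, refresh it
# only after a successful "apt-get update", and restore it otherwise.
LISTS=/var/lib/apt/lists
BACKUP=/var/backups/apt-lists

if apt-get update; then
    # Success: refresh the known-good copy of the index.
    rm -rf "$BACKUP"
    cp -a "$LISTS" "$BACKUP"
else
    # Failure (dropped connection, corrupted index, ...): restore the
    # last known-good index so apt remains usable.
    echo "apt-get update failed; restoring last good index" >&2
    [ -d "$BACKUP" ] && rm -rf "$LISTS" && cp -a "$BACKUP" "$LISTS"
fi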
Filed under: Miscellenous Tagged: #Debconf16, #feature-request, #Julian Andreas Klose, #shell-script ?, apt, aptitude

Chris Lamb: Free software activities in July 2016

Here is my monthly update covering a large part of what I have been doing in the free software world (previously):



Debian
  • Created a proof-of-concept wrapper for pymysql to reduce the diff between Ubuntu and Debian's packaging of python-django. (tree)
  • Improved the NEW queue HTML report to display absolute timestamps when placing the cursor over relative times as well as to tidy the underlying HTML generation.
  • Tidied and pushed for the adoption of a patch against dak to also send mails to the signer of an uploaded package on security-master. (#796784)

LTS

This month I have been paid to work 14 hours on Debian Long Term Support (LTS). In that time I did the following:
  • "Frontdesk" duties, triaging CVEs, etc.
  • Improved the bin/lts-cve-triage.py script to ignore packages that have been marked as unsupported.
  • Improved the bin/contact-maintainers script to print a nicer error message if you mistype the package name.
  • Issued the following advisories:
    • DLA 541-1 for libvirt making the password policy consistent across the QEMU and VNC backends with respect to empty passwords.
    • DLA 574-1 for graphicsmagick fixing two denial-of-service vulnerabilities.
    • DLA 548-1 and DLA 550-1 for drupal7 fixing an open HTTP redirect vulnerability and a privilege escalation issue respectively.
    • DLA 557-1 for dietlibc removing the current directory from the default path.
    • DLA 577-1 for redis preventing the redis-cli tool creating world-readable history files.

Uploads
  • redis:
    • 3.2.1-2 Avoiding race conditions in upstream test suite.
    • 3.2.1-3 Correcting world-readable ~/.rediscli_history files.
    • 3.2.1-4 Preventing a race condition in the previous upload's patch.
    • 3.2.2-1 New upstream release.
    • 3.2.1-4~bpo8+1 Backport to jessie-backports.
  • strip-nondeterminism:
    • 0.020-1 Improved the PNG handler to not blindly trust chunk sizes, rewriting most of the existing code.
    • 0.021-1 Correcting a regression in the PNG handler where it would leave temporary files in the generated binaries.
    • 0.022-1 Correcting a further regression in the PNG handler with respect to IEND chunk detection.
  • python-redis (2.10.5-1~bpo8+1) Backport to jessie-backports.
  • reprotest (0.2) Sponsored upload.

Patches contributed


I submitted patches to fix faulty initscripts in lm-sensors, rsync, sane-backends & vsftpd.

In addition, I submitted 7 patches to fix typos in debian/rules against cme, gnugk (an incorrect reference to dh_install_init), php-sql-formatter, python-django-crispy-forms, libhook-lexwrap-perl, mknbi & ruby-unf-ext.

I also submitted 6 patches to fix reproducible toolchain issues (ie. ensuring the output is reproducible rather than the package itself) against libextutils-parsexs-perl, perl, naturaldocs, python-docutils, ruby-ronn & txt2tags.

Lastly, I submitted 65 patches to fix specific reproducibility issues in amanda, boolector, borgbackup, cc1111, cfingerd, check-all-the-things, cobbler, ctop, cvs2svn, eb, eurephia, ezstream, feh, fonts-noto, fspy, ftplib, fvwm, gearmand, gngb, golang-github-miekg-pkcs11, gpick, gretl, hibernate, hmmer, hocr, idjc, ifmail, ironic, irsim, lacheck, libmemcached-libmemcached-perl, libmongoc, libwebsockets, minidlna, mknbi, nbc, neat, nfstrace, nmh, ntopng, pagekite, pavuk, proftpd-dfsg, pxlib, pysal, python-kinterbasdb, python-mkdocs, sa-exim, speech-tools, stressapptest, tcpflow, tcpreen, ui-auto, uisp, uswsusp, vtun, vtwm, why3, wit, wordgrinder, xloadimage, xmlcopyeditor, xorp, xserver-xorg-video-openchrome & yersinia.

RC bugs

I also filed 68 RC bugs for packages that access the internet during build against betamax, curl, django-localflavor, django-polymorphic, dnspython, docker-registry, elasticsearch-curator, elib.intl, elib.intl, elib.intl, fabulous, flask-restful, flask-restful, flask-restful, foolscap, gnucash-docs, golang-github-azure-go-autorest, golang-github-fluent-fluent-logger-golang, golang-github-franela-goreq, golang-github-mesos-mesos-go, golang-github-shopify-sarama, golang-github-unknwon-com, golang-github-xeipuuv-gojsonschema, htsjdk, lemonldap-ng, libanyevent-http-perl, libcommons-codec-java, libfurl-perl, libgravatar-url-perl, libgravatar-url-perl, libgravatar-url-perl, libgravatar-url-perl, libgravatar-url-perl, libhttp-async-perl, libhttp-oai-perl, libhttp-proxy-perl, libpoe-component-client-http-perl, libuv, libuv1, licenseutils, licenseutils, licenseutils, musicbrainzngs, node-oauth, node-redis, nodejs, pycurl, pytest, python-aiohttp, python-asyncssh, python-future, python-guacamole, python-latexcodec, python-pysnmp4, python-qtawesome, python-simpy, python-social-auth, python-structlog, python-sunlight, python-webob, python-werkzeug, python-ws4py, testpath, traitlets, urlgrabber, varnish-modules, webtest & zurl.


Finally, I filed 100 FTBFS bugs against abind, backup-manager, boot, bzr-git, cfengine3, chron, cloud-sptheme, cookiecutter, date, django-uwsgi, djangorestframework, docker-swarm, ekg2, evil-el, fasianoptions, fassets, fastinfoset, fest-assert, fimport, ftrading, gdnsd, ghc-testsuite, golang-github-magiconair-properties, golang-github-mattn-go-shellwords, golang-github-mitchellh-go-homedir, gplots, gregmisc, highlight.js, influxdb, jersey1, jflex, jhdf, kimwitu, libapache-htpasswd-perl, libconfig-model-itself-perl, libhtml-tidy-perl, liblinux-prctl-perl, libmoox-options-perl, libmousex-getopt-perl, libparanamer-java, librevenge, libvirt-python, license-reconcile, louie, mako, mate-indicator-applet, maven-compiler-plugin, mgt, mgt, mgt, misc3d, mnormt, nbd, ngetty, node-xmpp, nomad, perforate, pyoperators, pyqi, python-activipy, python-bioblend, python-cement, python-gevent, python-pydot-ng, python-requests-toolbelt, python-ruffus, python-scrapy, r-cran-digest, r-cran-getopt, r-cran-lpsolve, r-cran-rms, r-cran-timedate, resteasy, ruby-berkshelf-api-client, ruby-fog-libvirt, ruby-grape-msgpack, ruby-jquery-rails, ruby-kramdown-rfc2629, ruby-moneta, ruby-parser, ruby-puppet-forge, ruby-rbvmomi, ruby-redis-actionpack, ruby-unindent, ruby-web-console, scalapack-doc, scannotation, snow, sorl-thumbnail, svgwrite, systemd-docker, tiles-request, torcs, utf8proc, vagrant-libvirt, voms-api-java, wcwidth, xdffileio, xmlgraphics-commons & yorick.

FTP Team

As a Debian FTP assistant I ACCEPTed 114 packages: apertium-isl-eng, apertium-mk-bg, apertium-urd-hin, apprecommender, auto-apt-proxy, beast-mcmc, caffe, caffe-contrib, debian-edu, dh-make-perl, django-notification, dpkg-cross, elisp-slime-nav, evil-el, fig2dev, file, flightgear-phi, friendly-recovery, fwupd, gcc-5-cross, gdbm, gnustep-gui, golang-github-cznic-lldb, golang-github-dghubble-sling, golang-github-docker-leadership, golang-github-rogpeppe-fastuuid, golang-github-skarademir-naturalsort, golang-glide, gtk+2.0, gtranscribe, kdepim4, kitchen, lepton, libcgi-github-webhook-perl, libcypher-parser, libimporter-perl, liblist-someutils-perl, liblouis, liblouisutdml, libneo4j-client, libosinfo, libsys-cpuaffinity-perl, libtest2-suite-perl, linux, linux-grsec, lua-basexx, lua-compat53, lua-fifo, lua-http, lua-lpeg-patterns, lua-mmdb, lua-openssl, mash, mysql-5.7, node-quickselect, nsntrace, nvidia-graphics-drivers, nvidia-graphics-drivers-legacy-304xx, nvidia-graphics-drivers-legacy-340xx, openorienteering-mapper, oslo-sphinx, p4est, patator, petsc, php-mailparse, php-yaml, pykdtree, pypass, python-bioblend, python-cotyledon, python-jack-client, python-mido, python-openid-cla, python-os-api-ref, python-pydotplus, python-qtconsole, python-repoze.sphinx.autointerface, python-vispy, python-zenoss, r-cran-bbmle, r-cran-corpcor, r-cran-ellipse, r-cran-minpack.lm, r-cran-rglwidget, r-cran-rngtools, r-cran-scatterd3, r-cran-shinybs, r-cran-tibble, reproject, retext, ring, ruby-github-api, ruby-rails-assets-jquery-ui, ruby-swd, ruby-url-safe-base64, ruby-vmstat, ruby-webfinger, rustc, shadowsocks-libev, slepc, staticsite, steam, straight.plugin, svgwrite, tasksh, u-msgpack-python, ufo2otf, user-mode-linux, utf8proc, vizigrep, volk, wchartype, websockify & wireguard.

28 July 2016

Gunnar Wolf: Subtitling DebConf talks. Come and join!

As I have said here a couple of times already, I am teaching a diploma course on embedded Linux at UNAM, and one of the modules I'm teaching (with Sandino Araico) is the boot process. We focus on ARM for obvious reasons, and while I have done my reading on the topic, I am very far from considering myself an expert. So, after attending Martin Michlmayr's Debian on ARM devices talk, I decided to do its subtitles as part of my teaching job. This talk gives a great panorama of what actually has to happen in order to get an ARM machine to boot, and how support for new ARM devices comes around to Linux in general and to Debian in particular. Perfect for our topic! But my students are not always very fluent in English, so giving a hand is always most welcome. In case any of you dear readers didn't know, we have a DebConf subtitling team. Yes, our work takes much longer to reach the public, and we have no hopes whatsoever of getting it completed, but every person lending a hand and subtitling a talk that they thought was interesting helps a lot to improve our talks' usability. Even if you don't have enough time to do a whole talk (we are talking about some 6 hours per 45-minute session), adding a bit of work is very, very, very welcome. So... enjoy! And thanks in advance for your work!

24 July 2016

Gregor Herrmann: RC bugs 2016/01-29

seems I've neglected both my blog & my RC bug fixing activities in the last months. anyway, since I still keep track of RC bugs I worked on, I thought I might as well publish the list:

10 July 2016

Bits from Debian: New Debian Developers and Maintainers (May and June 2016)

The following contributors got their Debian Developer accounts in the last two months: The following contributors were added as Debian Maintainers in the last two months: Congratulations!

27 June 2016

John Goerzen: I'm switching from git-annex to Syncthing

I wrote recently about using git-annex for encrypted sync, but due to a number of issues with it, I've opted to switch to Syncthing.

I'd been using git-annex with real but noncritical data. Among the first issues I noticed was occasional but persistent high CPU usage spikes which, once started, would persist apparently forever. I had an issue where git-annex tried to replace files I'd removed from its repo with broken symlinks, but the real final straw was a number of issues with the gcrypt remote repos. git-remote-gcrypt appears to have a number of possible race conditions on the remote, and at least one of them somehow caused encrypted data to appear in a packfile on a remote repo. Why there was data in a packfile there, I don't know, since git-annex is supposed to keep the data out of packfiles. Anyhow, git-annex is still an awesome tool with a lot of use cases, but I'm concluding that live sync to an encrypted git remote isn't quite there yet for me.

So I looked for alternatives. My main criteria were support for live sync (via inotify or similar) and not requiring the files to be stored unencrypted on a remote system (my local systems all use LUKS). I found Syncthing met these requirements.

Syncthing is pretty interesting in that, like git-annex, it doesn't require a centralized server at all. Rather, it basically forms a mesh between your devices. Its concept is somewhat similar to the proprietary Bittorrent Sync: all the nodes communicate about what files and chunks of files they have, and the changes that are made, and propagate them as quickly as possible. Unlike, say, Dropbox or Owncloud, Syncthing can actually support simultaneous downloads from multiple remotes for optimum performance when there are many changes. Combined with syncthing-inotify or syncthing-gtk, it has immediate detection of changes and therefore very quick propagation of them.

Syncthing is particularly adept at figuring out ways for the nodes to communicate with each other. It begins by broadcasting on the local network, so known nearby nodes can be found directly. The Syncthing folks also run a discovery server (though you can use your own if you prefer) that lets nodes find each other on the Internet. Syncthing will attempt to use UPnP to configure firewalls to let it out, but if that fails, the last resort is a traffic relay server; again, a number of volunteers host these online, but you can run your own if you prefer.

Each node in Syncthing has an RSA keypair, and what amounts to part of the public key is used as a globally unique node ID. The initial link between nodes is accomplished by pasting the globally unique ID from one node into the "add node" screen on the other; the user of the first node then must accept the request, and from that point on syncing can proceed. The data is all transmitted encrypted, of course, so interception will not cause data to be revealed.

Really my only complaint about Syncthing so far is that, although it binds to localhost, the web GUI does not require authentication by default.

There is an ITP open for Syncthing in Debian, but until then, their apt repo works fine. For syncthing-gtk, the trusty version of the webupd8 PPA works in Jessie (though be sure to pin it to a low priority if you don't want it replacing some unrelated Debian packages).
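For completeness, setting up their repository looks roughly like the following. This is a sketch only: the key URL and the suite name below are taken from Syncthing's published instructions at apt.syncthing.net, so verify them there, as they may change:

$ curl -s https://syncthing.net/release-key.txt | sudo apt-key add -
$ echo "deb https://apt.syncthing.net/ syncthing stable" | sudo tee /etc/apt/sources.list.d/syncthing.list
$ sudo apt-get update && sudo apt-get install syncthing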

18 June 2016

Manuel A. Fernandez Montecelo: More work on aptitude

The last few months have been a bit of a crazy period of ups and downs, with a tempest of events beneath the apparently and deceivingly calm surface waters of being unemployed (still at it).

The daily grind

Chief activities are, of course, those related to the daily grind of job-hunting, sending applications, and preparing and attending interviews. It is demoralising when one searches for many days or weeks without seeing anything suitable for one's skills or interests, or other more general life expectations. And it takes a lot of time and effort to put one's best into applications for positions that one is really, really interested in, and even for the ones which are meh for a variety of reasons (e.g. one is not very suitable for what the offer demands).

After that, not being invited to interviews (or doing very badly at them) is bad, of course, but quick and not very painful: a swift, merciful end to the process. It is all the more draining when waiting for many weeks, if not a few months, with the uncertainty of not knowing whether one is going to be lucky enough to be summoned for an interview; harbouring some hope (one has to appear enthusiastic in the interviews, after all), while trying to keep it contained lest it grow too much; then, in the interview, hearing good words and some praise, and getting the impression that one will fit in, that one did nicely and that chances are good (letting the hope grow again); starting to think about the life changes that the job would require, so as to be able to make a quick decision should the offer finally arrive; perhaps making some choices and compromises based on the uncertain result; then waiting for a week or two after the interview to know the result...

... only to end up being unsuccessful. All the effort and hopes finally get squashed with a cold, short email or automatic response, or more often than not complete radio silence from prospective employers, as an end to a multi-month-long process. An emotional roller coaster [1], which happened to me several times in the last few months.

All in a day's work

The months of preparing and waiting for a new job often imply an impasse that puts many other things one cares about on hold, and one makes plans that will never come to pass. All in a day's (half-year's?) work of an unemployed poor soul. But not all is bad. This period was also a busy time making plans about life, mid- and long-term; the usual and some really unusual(!) family events; visits to and from friends, old and new; attending nice little local Debian gatherings or the bigger gathering of Debian SunCamp2016; and other work for side projects or for other events that will happen soon... And amidst all that, I managed to get some work done on aptitude.

Two pictures worth (less than) a thousand bugs

To be precise, worth 709 bugs: 488 bugs in the first graph, plus 221 in the second. On 2015-11-15 (link to the post Work on aptitude):

aptitude BTS Graph, 2015-11-15

On 2016-06-18:

aptitude BTS Graph, 2016-06-18

Numbers

The BTS numbers for aptitude right now are:

Highlights

Beyond graphs and stats, I am specially happy about two achievements in the last year:
  1. To have aptitude working today, first and foremost. Apart from the abandonment it suffered in previous years, I mean specifically the critical step of getting it through the troubles of last summer, with the GCC-5/C++11 transition in parallel with a transition of the Boost library (explained in more detail in Work on aptitude). Without that, aptitude possibly would not have survived until today.
  2. Improvements to the suggestions of the resolver. In version 0.8 there were a lot of changes related to improving the order of the suggestions from the resolver when it finds conflicts or other problems with the planned actions. Historically, but specially in the last few years, there have been many complaints about nonsensical or dangerous suggestions from the resolver. The first solution it offered was very often regarded as highly undesirable (for example, the removal of many packages), with preferable solutions like upgrades of one or only a handful of packages being offered only after many removals, and keeps only offered as a last resort.
Perhaps these changes don't get a lot of attention, given that in the first case it's just a matter of keeping things working (with few people realising that it could have collapsed on the spot if left unattended), and the second can probably go unnoticed, because "it just works" or "it started to work more smoothly" doesn't get as much immediate attention as "it suddenly broke!". Still, I wanted to mention them, because I am quite proud of those.

Thanks

Even if I put a lot of work into aptitude in the last year, the results in the graphs and numbers were not achieved solely by me. Special thanks go to Axel Beckert (abe / XTaran) and the apt team, David Kalnischkies and Julian Andres Klode, who, despite the claim on that page, does not mostly work on python-apt anymore... but also on the main tools. They helped by fixing some of the issues directly, changing things in apt that benefit aptitude, testing changes, triaging bugs or commenting on them, patiently explaining to me why something in libapt doesn't do what I think it does, and being good company in general. Not least, for holding impromptu BTS group therapy / support meetings, for those cases when prolonged exposure to BTS activity starts to induce very bad feelings. Thanks also to the people who sent translation updates, notified me about corrections, sent or tested patches, submitted bugs, or tried to help in other ways. See the change logs for details.

Notes

[1] ^ It's even an example in the Cambridge Dictionaries Online website, for the entry roller coaster:
He was on an emotional roller coaster for a while when he lost his job.
